A Fast Randomized Incremental Gradient Method for Decentralized Nonconvex Optimization
Abstract
In this article, we study decentralized nonconvex finite-sum minimization problems described over a network of nodes, where each node possesses a local batch of data samples. In this context, we analyze a single-timescale randomized incremental gradient method, called GT-SAGA. GT-SAGA is computationally efficient as it evaluates one component gradient per node per iteration and achieves provably fast and robust performance by leveraging node-level variance reduction and network-level gradient tracking. For general smooth nonconvex problems, we show the almost sure and mean-squared convergence of GT-SAGA to a first-order stationary point and further describe regimes of practical significance where it outperforms the existing approaches and achieves a network topology-independent complexity, respectively. When the global function satisfies the Polyak–Łojasiewicz condition, we show that GT-SAGA exhibits linear convergence to an optimal solution in expectation and describe regimes of practical interest where this performance is network topology independent and improves upon the existing methods. Numerical experiments are included to highlight the main convergence aspects of GT-SAGA in nonconvex settings.
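To make the two ingredients described above concrete, the sketch below combines a SAGA-style gradient table at each node with a gradient-tracking update over a mixing matrix. It is only an illustrative reading of the abstract, not the authors' implementation: the ring topology, the quadratic local components, the step size, and all names (grad_component, W, table) are assumptions introduced here.

import numpy as np

rng = np.random.default_rng(0)
n, m, d = 8, 20, 5                      # nodes, local components per node, dimension
A = rng.normal(size=(n, m, d))          # synthetic local data held by each node (assumed)
b = rng.normal(size=(n, m))

def grad_component(i, s, x):
    # Gradient of the s-th local least-squares component at node i (assumed test problem).
    a = A[i, s]
    return (a @ x - b[i, s]) * a

# Doubly stochastic mixing matrix for a ring network (assumed topology).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

alpha = 0.05                            # constant step size (assumed)
x = np.zeros((n, d))                    # one local iterate per node
table = np.stack([[grad_component(i, s, x[i]) for s in range(m)] for i in range(n)])
g = table.mean(axis=1)                  # local SAGA-style gradient estimators
y = g.copy()                            # gradient trackers

for k in range(500):
    x_new = W @ x - alpha * y           # consensus mixing plus a step along the tracked gradient
    g_new = np.empty_like(g)
    for i in range(n):
        s = rng.integers(m)             # one randomly chosen component per node per iteration
        fresh = grad_component(i, s, x_new[i])
        g_new[i] = fresh - table[i, s] + table[i].mean(axis=0)   # node-level variance reduction
        table[i, s] = fresh
    y = W @ y + g_new - g               # network-level gradient tracking
    x, g = x_new, g_new

print("norm of average local estimator:", np.linalg.norm(g.mean(axis=0)))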
Similar resources
Fast Incremental Method for Nonconvex Optimization
We analyze a fast incremental aggregated gradient method for optimizing nonconvex problems of the form $\min_x \sum_i f_i(x)$. Specifically, we analyze the SAGA algorithm within an Incremental First-order Oracle framework, and show that it converges to a stationary point provably faster than both gradient descent and stochastic gradient descent. We also discuss a Polyak’s special class of nonconvex pro...
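For reference, the SAGA iteration analyzed in that framework maintains a table of stored gradients $\nabla f_i(\phi_i^k)$; the form below is the standard one and is not quoted from that paper:
\[
x^{k+1} = x^{k} - \alpha \left[ \nabla f_{j_k}(x^{k}) - \nabla f_{j_k}(\phi_{j_k}^{k}) + \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(\phi_i^{k}) \right],
\qquad \phi_{j_k}^{k+1} = x^{k},
\]
where $j_k$ is an index drawn uniformly at random and the table entries of all other components are left unchanged.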
On Nonconvex Decentralized Gradient Descent
Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for convex consensus optimization. However, on consensus optimization with nonconvex objective functions, our understanding of the behavior of these algorithms is limited. When we lose convexity, we cannot hope to obtain globally optimal solutions (though we st...
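For context, the decentralized gradient descent (DGD) iteration studied in this line of work is typically written as follows (standard form with mixing weights $w_{ij}$ and step size $\alpha$; not quoted from that paper):
\[
x_i^{k+1} = \sum_{j=1}^{n} w_{ij}\, x_j^{k} - \alpha\, \nabla f_i(x_i^{k}), \qquad i = 1,\dots,n,
\]
where node $i$ only mixes with its neighbors, i.e., $w_{ij} \neq 0$ only if $j$ is a neighbor of $i$.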
A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization
We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems. In particular, the objective function is given by the summation of a differentiable (possibly nonconvex) component, together with a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. The algorithm is ...
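The proximal stochastic gradient step underlying such methods takes the generic form below, with $h$ the convex (possibly nonsmooth) component and $v^k$ a variance-reduced estimator of the gradient of the smooth part; this is the standard template rather than the exact ProxSVRG+ recursion:
\[
x^{k+1} = \operatorname{prox}_{\alpha h}\!\left(x^{k} - \alpha\, v^{k}\right),
\qquad
\operatorname{prox}_{\alpha h}(z) = \arg\min_{u}\ \tfrac{1}{2}\|u - z\|^{2} + \alpha\, h(u).
\]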
An optimal randomized incremental gradient method
In this paper, we consider a class of finite-sum convex optimization problems whose objective function is given by the summation of m (≥ 1) smooth components together with some other relatively simple terms. We first introduce a deterministic primal-dual gradient (PDG) method that can achieve the optimal black-box iteration complexity for solving these composite optimization problems using a pr...
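The composite finite-sum problem referred to here can be stated as (standard formulation; the precise assumptions are in the paper):
\[
\min_{x \in X} \; \sum_{i=1}^{m} f_i(x) + h(x),
\]
where each $f_i$ is smooth and convex and $h$ is a relatively simple (e.g., proximal-friendly) term.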
Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization
Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural networks and have achieved many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provi...
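As a concrete picture of this algorithm class, the snippet below runs several threads that apply lock-free stochastic gradient updates to a shared parameter vector (a Hogwild!-style sketch on an assumed least-squares problem; it does not reproduce the algorithms or assumptions of that paper).

import threading
import numpy as np

rng = np.random.default_rng(1)
N, d = 1000, 10
A = rng.normal(size=(N, d))                     # synthetic data (assumed)
b = A @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

x = np.zeros(d)                                 # shared parameter vector
alpha = 1e-3                                    # step size (assumed)

def worker(x, steps):
    local_rng = np.random.default_rng()
    for _ in range(steps):
        i = local_rng.integers(N)
        g = (A[i] @ x - b[i]) * A[i]            # stochastic gradient of one sample
        x -= alpha * g                          # asynchronous, lock-free in-place update

threads = [threading.Thread(target=worker, args=(x, 2000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("rms residual:", np.linalg.norm(A @ x - b) / np.sqrt(N))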
Journal
Journal title: IEEE Transactions on Automatic Control
Year: 2022
ISSN: 0018-9286, 1558-2523, 2334-3303
DOI: https://doi.org/10.1109/tac.2021.3122586